Web Survey Bibliography
Background: Self-administered survey questionnaires are an important data collection tool in clinical practice, public health research and epidemiology. They are ideal for achieving wide geographic coverage of the target population and for dealing with sensitive topics, and they are less resource-intensive than other data collection methods. These survey questionnaires can be delivered electronically, which can maximise the scalability and speed of data collection while reducing cost. In recent years, the use of apps running on consumer smart devices (i.e., smartphones and tablets) for this purpose has received considerable attention. However, variation in the mode of delivering a survey questionnaire could affect the quality of the responses collected.

Objectives: To assess the impact that smartphone and tablet apps, as a delivery mode, have on the quality of survey questionnaire responses compared with any other delivery mode: paper, laptop computer, tablet computer (manufactured before 2007), short message service (SMS) and plastic objects.

Search methods: We searched MEDLINE, EMBASE, PsycINFO, IEEE Xplore, Web of Science, CABI: CAB Abstracts, Current Contents Connect, ACM Digital Library, ERIC, Sociological Abstracts, Health Management Information Consortium, the Campbell Library and CENTRAL. We also searched registers of current and ongoing clinical trials, such as ClinicalTrials.gov and the World Health Organization (WHO) International Clinical Trials Registry Platform, as well as the grey literature in OpenGrey, Mobile Active and ProQuest Dissertations & Theses. Lastly, we searched Google Scholar and the reference lists of included studies and relevant systematic reviews. We performed all searches on 12 and 13 April 2015.
Selection criteria: We included parallel randomised controlled trials (RCTs), crossover trials and paired repeated-measures studies that compared the electronic delivery of self-administered survey questionnaires via a smartphone or tablet app with any other delivery mode. We included data obtained from participants completing health-related self-administered survey questionnaires, both validated and non-validated, and data provided both by healthy volunteers and by those with any clinical diagnosis. We included studies that reported any of the following outcomes: data equivalence; data accuracy; data completeness; response rates; differences in the time taken to complete a survey questionnaire; differences in respondents' adherence to the original sampling protocol; and acceptability to respondents of the delivery mode. We included studies published in 2007 or later, as devices that became available from this time onwards are compatible with the mobile operating system (OS) framework that focuses on apps.
Data collection and analysis
Two review authors independently extracted data from the included studies using a standardised form created for this systematic review in REDCap, then compared their forms to reach consensus. Through an initial systematic mapping of the included studies, we identified two settings in which survey completion took place: controlled and uncontrolled. These settings differed in terms of (i) the location where surveys were completed, (ii) the frequency and intensity of sampling protocols, and (iii) the level of control over potential confounders (e.g., type of technology, level of help offered to respondents). We conducted a narrative synthesis of the evidence because high levels of clinical and methodological diversity made a meta-analysis inappropriate. We reported our findings for each outcome according to the setting in which the studies were conducted.

Main results: We included 14 studies (15 records) with a total of 2275 participants, although we included only 2272 participants in the final analyses because data were missing for three participants from one included study. Regarding data equivalence, in both controlled and uncontrolled settings, the included studies found no significant differences in the mean overall scores between apps and other delivery modes, and all correlation coefficients exceeded the recommended thresholds for data equivalence. Concerning the time taken to complete a survey questionnaire in a controlled setting, one study found that an app was faster than paper, whereas another study found no significant difference between the two delivery modes. In an uncontrolled setting, one study found that an app was faster than SMS. Data completeness and adherence to sampling protocols were reported only in uncontrolled settings. Regarding the former, an app was found to result in more complete records than paper, and in significantly more data entries than an SMS-based survey questionnaire.
Regarding adherence to the sampling protocol, apps may be better than paper but no different from SMS. We identified multiple definitions of acceptability to respondents, with inconclusive results: preference; ease of use; willingness to use a delivery mode; satisfaction; effectiveness of the system; informativeness; perceived time taken to complete the survey questionnaire; perceived benefit of a delivery mode; perceived usefulness of a delivery mode; perceived ability to complete a survey questionnaire; maximum length of time that participants would be willing to use a delivery mode; and reactivity to the delivery mode and its successful integration into respondents' daily routine. Finally, regardless of the study setting, none of the included studies reported data accuracy or response rates.

Authors' conclusions: Our results, based on a narrative synthesis of the evidence, suggest that apps might not affect data equivalence as long as the intended clinical application of the survey questionnaire, its intended frequency of administration and the setting in which it was validated remain unchanged. There were no data on data accuracy or response rates, and findings on the time taken to complete a self-administered survey questionnaire were contradictory. Furthermore, although apps might improve data completeness, there is not enough evidence to assess their impact on adherence to sampling protocols. None of the included studies assessed how elements of user interaction design, survey questionnaire design and intervention design might influence mode effects. Those conducting research in public health and epidemiology should not assume that mode effects relevant to other delivery modes apply to apps running on consumer smart devices. Those conducting methodological research might wish to explore the issues highlighted by this systematic review.
Background: Survey questionnaires are an important tool in public health and clinical research, as they offer a convenient way to collect data from large numbers of respondents, can address sensitive topics, and are less resource-intensive than other data collection methods. Using apps on smartphones or tablet devices to deliver questionnaires could maximise the scale and speed of data collection while reducing cost. However, before these technologies are adopted at scale, we need to understand how they might affect the quality of the responses collected. In particular, we need to consider how the quality of the data collected affects the evidence base on which many public health and healthcare decisions rest. Objectives: In this Cochrane review, we assessed the impact of using apps to deliver survey questionnaires on various aspects of the quality of the responses collected. These included: response rates, data accuracy and completeness, the time taken to complete a questionnaire, and acceptability to respondents. Methods and results: We searched for studies published from January 2007 to April 2015. We analysed data from 14 studies with a total of 2272 participants. We did not conduct a meta-analysis because of differences between the studies; instead, we described the results of each study. The studies were conducted in two settings: controlled and uncontrolled. The former refers to a research or clinical environment in which health professionals could better control potential confounders, such as the respondent's location and the time of day at which the questionnaire was completed, the type of technology used, and the level of help available to respondents in case of technical difficulties.
Uncontrolled studies were conducted outside such a research or clinical environment (e.g., at home). We found that in both settings the use of apps may be equivalent to other delivery modes, such as questionnaires on paper, on laptops and via SMS. It is unclear whether apps can speed up completion compared with other modes. At the same time, our data suggest that factors such as the characteristics of the study population and the design of the questionnaire and interface could somewhat alter this conclusion. Data on completeness and on adherence to the sampling protocol were reported only in uncontrolled studies. Our results show that the use of apps may produce more complete data sets and may improve adherence to the data collection protocol compared with paper, but not compared with SMS. The studies used multiple definitions of acceptability to respondents, which could not be standardised across the included studies. Finally, none of the studies reported response rates or data accuracy. Conclusions: Overall, there is insufficient evidence to make firm recommendations on the use of apps or on whether they have any effect on survey results. Data equivalence may be unaffected by the use of apps, provided that the intended clinical application of the questionnaire and its intended frequency of administration remain the same. Future research should examine how user interaction design, questionnaire design and intervention design might affect data equivalence and the other outcomes assessed in this review.
Web survey bibliography (345)
- Mind the Mode: Differences in Paper vs. Web-Based Survey Modes Among Women With Cancer; 2017; Hagan, T. L.; Belcher, S. M.; Donovan, H. S.
- Web-Based Survey Methodology; 2017; Wright, K. B.
- Handbook of Research Methods in Health Social Sciences; 2017; Liamputtong, P.
- Lessons from recruitment to an internet based survey for Degenerative Cervical Myelopathy: merits of...; 2017; Davies, B.; Kotter, M. R.
- Achieving Strong Privacy in Online Survey; 2017; Zhou, Yo.; Zhou, Yi.; Chen, S.; Wu, S. S.
- Measuring Subjective Health and Life Satisfaction with U.S. Hispanics; 2017; Lee, S.; Davis, R.
- The Effect of Respondent Commitment on Response Quality in Two Online Surveys; 2017; Cibelli Hibben, K.
- In search of best practices; 2017; Kappelhof, J. W. S.; Steijn, S.
- Rates, Delays, and Completeness of General Practitioners’ Responses to a Postal Versus Web-Based...; 2017; Sebo, P.; Maisonneuve, H.; Cerutti, B.; Pascal Fournier, J.; Haller, D. M.
- Web Health Monitoring Survey: A New Approach to Enhance the Effectiveness of Telemedicine Systems; 2017; Romano, M. F.; Sardella, M. V.; Alboni, F.
- Gathering Opinions on Depression Information Needs and Preferences: Samples and Opinions in Clinic Versus...; 2017; Bernstein, M. T.; Walker, J. R.; Sexton, K. A.; Katz, A.; Beatie, B. E.
- Oversampling as a methodological strategy for the study of self-reported health among lesbian, gay and...; 2017; Anderssen, N.; Malterud, K.
- Demographic Question Placement: Effect on Item Response Rates and Means of a Veterans Health Administration...; 2017; Teclaw, R.; Price, M.; Osatuke, K.
- Comparison of response patterns in different survey designs: a longitudinal panel with mixed-mode and...; 2017; Ruebsamen, N.; Akmatov, M. K.; Castell, S.; Karch, A.; Mikolajczyk, R. T.
- Survey mode influence on patient-reported outcome scores in orthopaedic surgery: telephone results may...; 2017; Hammarstedt, J. E.; Redmond, J. M.; Gupta, As.; Dunne, K. F.; Vemula, S. P.; Domb, B. G.
- Comparing Twitter and Online Panels for Survey Recruitment of E-Cigarette Users and Smokers; 2016; Guillory, J.; Kim, A.; Murphy, J.; Bradfield, B.; Nonnemaker, J.; Hsieh, Y. P.
- Web based health surveys: Using a Two Step Heckman model to examine their potential for population health...; 2016; Morrissey, K.; Kinderman, P.; Pontin, E.; Tai, S.; Schwannauer, M.
- “Better do not touch” and other superstitions concerning melanoma: the cross-sectional web...; 2016; Gajda, M.; Kamińska-Winciorek, G.; Wydmański, J.; Tukiendorf, A.
- Methods for Evaluating Respondent Attrition in Web-Based Surveys; 2016; Hochheimer, C. J.; Sabo, R. T.; Krist, A. H.; Day, T.; Cyrus, J.; Woolf, S. H.
- Question order sensitivity of subjective well-being measures: focus on life satisfaction, self-rated...; 2016; Lee, S.; McClain, C.; Webster, N.; Han, S.
- Using Visual Analogue Scales in eHealth: Non-Response Effects in a Lifestyle Intervention; 2016; Kuhlmann, T.; Reips, U.-D.; Wienert, J.; Lippke, S.
- A Case Study on the Use of Propensity Score Adjustments with Web Survey Data; 2016; Parsons, V.
- Using official surveys to reduce bias of estimates from nonrandom samples collected by web surveys; 2016; Beresovsky, V.; Dorfman, A.; Rumcheva, P.
- A Feasibility Study of Recruiting and Maintaining a Web Panel of People with Disabilities; 2016; Chandler, J.
- Exploration of Methods for Blending Unconventional Samples with Traditional Probability Samples; 2016; Gellar, J.; Zhou, H.; Sinclair, M. D.
- Evaluation of mode equivalence of the MSKCC Bowel Function Instrument, LASA Quality of Life, and Subjective...; 2016; Bennett, A. V.; Keenoy, K.; Shouery, M.; Basch, E.; Temple, L. K.
- Population Survey Features and Response Rates: A Randomized Experiment; 2016; Guo, Y.; Kopec, J.; Cibere, J.; Li, L. C.; Goldsmith, C. H.
- Web Health Monitoring Survey: A New Approach to Enhance the Effectiveness of Telemedicine Systems ; 2016; Romano, M. F.; Sardella, M. V.; Alboni, F.
- Facebook, Twitter, & Qr codes: An exploratory trial examining the feasibility of social media mechanisms...; 2016; Gu, L. L.; Skierkowski, D.; Florin, P.; Friend, K.; Ye, Y.
- Helping respondents provide good answers in Web surveys; 2016; Couper, M. P.; Zhang, C.
- The Effects of a Delayed Incentive on Response Rates, Response Mode, Data Quality, and Sample Bias in...; 2016; McGonagle, K.; Freedman, V. A.
- Feature phones no barrier to conducting an effective conjoint study; 2016; de Rooij, R.; Dossin, R.
- Patient preference: a comparison of electronic patient-completed questionnaires with paper among cancer...; 2016; Martin, P.; Brown, M.C.; Espin‐Garcia, O.; Cuffe, S.; Pringle, D.; Mahler, M.; Villeneuve, J.;...
- Detecting Insufficient Effort Responding with an Infrequency Scale: Evaluating Validity and Participant...; 2016; Huang, J. L.; Bowling, N. A.; Liu, Me.; Li, Yu.
- On-line life history calendar and sensitive topics: A pilot study; 2016; Morselli, D.; Berchtold, A.; Granell, J.-C. S.; Berchtold, And.
- An experiment comparing grids and item-by-item formats in web surveys completed through PCs and smartphones...; 2016; Revilla, M.; Toninelli, D.; Ochoa, C.
- Assessing the Effects of Participant Preference and Demographics in the Usage of Web-based Survey Questionnaires...; 2016; Mlikotic, R.; Parker, B.; Rajapakshe, R.
- Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes; 2016; Chien, T. S.; Lin, W.S.
- Comparing online and telephone survey results in the context of a skin cancer prevention campaign evaluation...; 2016; Hollier, L.P.; Pettigrew, S.; Slevin, T.; Strickland, M.; Minto, C.
- Collecting Data from mHealth Users via SMS Surveys: A Case Study in Kenya; 2016; Johnson, D.
- Effect of a Post-paid Incentive on Response to a Web-based Survey; 2016; Brown, J. A.; Serrato, C. A.; Hugh, M.; Kanter, M. H.; Spritzer, K. L.; Hays, R. D.
- Reducing Underreports of Behaviors in Retrospective Surveys: The Effects of Three Different Strategies...; 2016; Lugtig, P. J.; Glasner, T.; Boeve, A.
- Quantifying Under- and Overreporting in Surveys Through a Dual-Questioning-Technique Design; 2016; de Jong, M.; Fox, J.-P.; Steenkamp, J.-B. E. M.
- Take the money and run? Redemption of a gift card incentive in a clinician survey; 2016; Chen, J. S.; Sprague, B. L.; Klabunde, C. N.; Tosteson, A. N. A.; Bitton, A.; Onega, T.; MacLean, C....
- Creation and Usability Testing of a Web-Based Pre-Scanning Radiology Patient Safety and History Questionnaire...; 2016; Robinson, T. J.; DuVall, S.; Wiggins III, R
- A multi-group analysis of online survey respondent data quality: Comparing a regular USA consumer panel...; 2016; Golden, L.; Albaum, G.; Roster, C. A.; Smith, S. M.
- The effect of email invitation elements on response rate in a web survey within an online community; 2016; Petrovcic, A.; Petric, G.; Lozar Manfreda, K.
- Quota Controls in Survey Research; 2016; Gittelman, S. H.; Thomas, R. K.; Lavrakas, P. J.; Lange, V.
- Internet-administered Health-related Quality of Life Questionnaires Compared With Pen and Paper in an...; 2016; Nitikman, M.; Mulpuri, K.; Reilly, C. W.